(pseudo)Scalar mesons in a self-consistent NJL model
In this study, we investigate the mass spectrum of the π and σ mesons
at finite chemical potential using the self-consistent NJL model and the
Fierz-transformed interaction Lagrangian. The model introduces an arbitrary
parameter α to reflect the weights of the Fierz-transformed interaction
channels. We show that when α exceeds a certain threshold value, the
chiral phase transition changes from first order to a smooth
crossover, as is evident from the behavior of the chiral condensates and
meson masses. Additionally, at high chemical potential, the smaller the value
of α, the higher the masses of the π and σ mesons become.
Moreover, the Mott and dissociation chemical potentials both increase with
α. Thus, the meson masses emerge as valuable experimental
observables for determining the value of α and for investigating the
properties of the chiral phase transition in dense QCD matter. Comment: Accepted by Chinese Physics
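The weighting scheme described above is often written, in the self-consistent NJL literature, as a convex combination of the original and Fierz-transformed interaction Lagrangians; a minimal sketch, assuming the weight parameter is the α of the abstract:

```latex
% Effective Lagrangian interpolating between the original NJL
% interaction and its Fierz transform; \alpha \in [0,1] weights
% the Fierz-transformed channels.
\mathcal{L}_{\mathrm{eff}}
  = (1-\alpha)\,\mathcal{L}_{\mathrm{NJL}} + \alpha\,\mathcal{L}_{F}
```

At α = 0 the standard NJL model is recovered, while increasing α strengthens the Fierz channels, which is what drives the first-order transition toward a crossover in the abstract's account.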
The asymmetric effect of infectious disease equity market volatility for the physical education economy: implication for a post-Covid world
Due to the growing importance of the sports economy and the
severe impact of the Covid-19 pandemic on it, this paper
examines, from an asymmetric perspective, how the infectious
disease equity market volatility (ID-EMV) tracker affects the
sports economy in a post-Covid world. We selected the
newspaper-based ID-EMV index and the Wind Physical Education
Concept Index (PEC) for our research. First, conventional
causality tests were unable to detect causality between ID-EMV
and PEC, implying that stock market volatility stemming
from COVID-19 risk had no impact on the sports economy.
However, allowing for potential asymmetric effects in this
relationship, we further investigated whether ID-EMV significantly
affects PEC under positive and negative shocks separately. The empirical
results confirm the existence of asymmetric effects. We are therefore
the first to focus on this asymmetric effect and to conduct
empirical research on it, which may provide educators and financial
market participants with a novel research perspective.
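The asymmetric analysis hinges on splitting a series into positive and negative shock components before testing causality. A minimal sketch of that decomposition (the Hatemi-J-style cumulative partial sums commonly used in asymmetric causality work; the function name and toy data are illustrative, not taken from the paper):

```python
import numpy as np

def partial_sums(x):
    """Decompose a level series into cumulative positive and negative
    shock components (Hatemi-J-style partial sums)."""
    dx = np.diff(x)
    pos = np.cumsum(np.maximum(dx, 0.0))  # cumulative positive shocks
    neg = np.cumsum(np.minimum(dx, 0.0))  # cumulative negative shocks
    return pos, neg

# toy series standing in for an index such as ID-EMV (illustrative only)
x = np.array([1.0, 1.5, 1.2, 2.0, 1.8])
pos, neg = partial_sums(x)
```

Causality tests would then be run between the positive/negative components of ID-EMV and PEC rather than between the raw series, which is what allows an effect hidden in the symmetric test to surface.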
Teaching the Art of Computer Programming at a Distance by Generating Dialogues using Deep Neural Networks
When teaching the art of Computer Programming, students with visual impairments (VI) are at a disadvantage, because speech is their preferred modality. Existing accessibility assistants can only read out predefined texts sequentially, word-for-word and sentence-for-sentence, whereas programming concepts could be conveyed in a more structured way. We have previously shown that deep neural networks such as Tree-Based Convolutional Neural Networks (TBCNN) and Gated Graph Neural Networks (GGNN) can classify algorithms across different programming languages with over 90% accuracy. Furthermore, TBCNN and GGNN have been shown to be useful for generating natural, conversational dialogues from natural language texts. In this paper, we propose a novel pedagogy called "Programming Assistant": a personal tutor that responds to voice commands, which trigger explanations of programming concepts, hands-free. We generate dialogues using DNNs, which substitute code with the names of the algorithms characterising the programs, and we read aloud descriptions of the code. Furthermore, the dialogue generation can be embodied in an Alexa Skill, which renders the dialogues in fully natural voices, forming the basis of a smart assistant that can handle a large number of formative questions in teaching the Art of Computer Programming at a distance.
Similarity of DMD gene deletion and duplication in the Chinese patients compared to global populations
Federated NLP in Few-shot Scenarios
Natural language processing (NLP) sees rich mobile applications. To support
various language understanding tasks, a foundation NLP model is often
fine-tuned in a federated, privacy-preserving setting (FL). This process
currently relies on at least hundreds of thousands of labeled training samples
from mobile clients; yet mobile users often lack willingness or knowledge to
label their data. Such an inadequacy of data labels is known as a few-shot
scenario; it becomes the key blocker for mobile NLP applications.
For the first time, this work investigates federated NLP in the few-shot
scenario (FedFSL). By retrofitting algorithmic advances in pseudo labeling and
prompt learning, we first establish a training pipeline that delivers
competitive accuracy when only 0.05% (fewer than 100) of the training samples
are labeled and the remainder are unlabeled. To instantiate the workflow, we
further present a system, FFNLP, that addresses the high execution cost with
novel designs: (1) curriculum pacing, which injects pseudo labels into the
training workflow at a rate commensurate with the learning progress;
(2) representational diversity, a mechanism for selecting the most learnable
data, for which alone pseudo labels will be generated; (3) co-planning of a
model's training depth and layer capacity. Together, these designs reduce
training delay, client energy, and network traffic by up to 46.0×, 41.2×, and
3000.0×, respectively. Through algorithm/system co-design, FFNLP demonstrates
that FL can apply to challenging settings where most training samples are
unlabeled.
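Of these designs, curriculum pacing is the most directly algorithmic: pseudo labels are admitted at a rate tied to training progress. A minimal sketch of the idea (the threshold schedule, values, and function names are assumptions for illustration, not the paper's actual mechanism):

```python
import numpy as np

def curriculum_pseudo_labels(probs, progress, base_thresh=0.95):
    """Admit pseudo labels for unlabeled samples, accepting more of them
    as training progresses (a sketch of 'curriculum pacing').
    probs: (n_samples, n_classes) model output probabilities.
    progress: training progress in [0, 1]."""
    # relax the confidence threshold as training progresses
    thresh = base_thresh - 0.15 * progress
    conf = probs.max(axis=1)       # model confidence per sample
    keep = conf >= thresh          # only confident samples get pseudo labels
    labels = probs.argmax(axis=1)  # hard pseudo labels
    return keep, labels

# early in training (progress=0), only very confident samples pass
probs = np.array([[0.98, 0.02], [0.60, 0.40], [0.10, 0.90]])
keep, labels = curriculum_pseudo_labels(probs, progress=0.0)
```

As `progress` grows the threshold drops, so weaker predictions are gradually folded into the training set, matching the pacing intuition in the abstract.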
Towards Practical Few-shot Federated NLP
Transformer-based pre-trained models have emerged as the predominant solution
for natural language processing (NLP). Fine-tuning such pre-trained models for
downstream tasks often requires a considerable amount of labeled private data.
In practice, private data is often distributed across heterogeneous mobile
devices and may be prohibited from being uploaded. Moreover, well-curated
labeled data is often scarce, presenting an additional challenge. To address
these challenges, we first introduce a data generator for federated few-shot
learning tasks, which captures the quantity and skewness of scarce labeled
data in a realistic setting. Subsequently, we propose AUG-FedPrompt, a
prompt-based federated learning system that exploits abundant unlabeled data
for data augmentation. Our experiments indicate that AUG-FedPrompt can perform
on par with full-set fine-tuning with a limited amount of labeled data.
However, such competitive performance comes at a significant system cost. Comment: EuroSys23 workshop
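The "quantity and skewness" of labeled data across clients is commonly modeled with a Dirichlet partition over classes; a minimal sketch of such a data generator (the Dirichlet mechanism and the parameter names are assumptions, not necessarily AUG-FedPrompt's actual generator):

```python
import numpy as np

def skewed_label_split(labels, n_clients, alpha=0.5, seed=0):
    """Partition sample indices across clients with label skew drawn
    from a Dirichlet distribution: smaller alpha -> more skew.
    A common federated-split heuristic, used here illustratively."""
    rng = np.random.default_rng(seed)
    n_classes = int(labels.max()) + 1
    client_idx = [[] for _ in range(n_clients)]
    for c in range(n_classes):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        # Dirichlet proportions decide how class c spreads over clients
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for cid, part in enumerate(np.split(idx, cuts)):
            client_idx[cid].extend(part.tolist())
    return client_idx

labels = np.array([0, 0, 0, 1, 1, 1, 1, 2, 2, 2])
parts = skewed_label_split(labels, n_clients=3)
```

Every sample lands on exactly one client, but the per-client class mix is uneven, reproducing the realistic skew the abstract describes.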